# 4-bit quantized efficient inference

## Llama3.2 3b TrSummarization Unsloth GGUF

- License: Apache-2.0
- Author: anilguven
- Type: Large Language Model · Other

A Turkish text generation model fine-tuned from unsloth/Llama-3.2-3B-bnb-4bit, specializing in summarization tasks.
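Since the model is distributed in GGUF format for 4-bit quantized inference, a minimal sketch of running it locally with llama-cpp-python might look like the following. The local filename, quantization variant, and prompt template are assumptions for illustration; check the model's repository for the actual GGUF files and recommended prompt format.

```python
# Minimal sketch: local inference with a 4-bit quantized GGUF model via llama-cpp-python.
# The model_path below is hypothetical; download the actual GGUF file
# (e.g. a Q4_K_M quantization) from the model's Hugging Face repository first.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-3.2-3b-trsummarization.Q4_K_M.gguf",  # hypothetical local filename
    n_ctx=4096,    # context window for the article plus the generated summary
    n_threads=8,   # adjust to the number of CPU cores available
)

article = "..."  # Turkish article text to summarize
prompt = f"Aşağıdaki metni özetle:\n\n{article}\n\nÖzet:"  # assumed prompt format

output = llm(prompt, max_tokens=256, temperature=0.3)
print(output["choices"][0]["text"].strip())
```

Because the weights are 4-bit quantized, a 3B-parameter model of this kind can typically run on CPU or a small GPU with only a few gigabytes of memory, at the cost of some generation quality compared with full-precision inference.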